---
title: Portable Prediction Server
description: How to use the Portable Prediction Server (PPS), which executes a DataRobot model package distributed as a self-contained Docker image.

---

# Portable Prediction Server {: #portable-prediction-server }

The Portable Prediction Server (PPS) is a DataRobot execution environment for DataRobot model packages (`.mlpkg` files) distributed as a self-contained Docker image. After you configure the Portable Prediction Server, you can begin running [single or multi model portable real-time predictions](pps-run-modes) and [portable batch prediction](portable-batch-predictions) jobs.

!!! important
    DataRobot strongly recommends using an Intel CPU to run the Portable Prediction Server. Using non-Intel CPUs can result in prediction inconsistencies, especially in deep learning models such as those built with TensorFlow or Keras.

The general configuration steps are:

* Download the model package.
* Download the PPS Docker image.
* Load the PPS image to Docker.
* Copy the Docker snippet DataRobot provides to run the Portable Prediction Server in your Docker container.

!!! important
    If you want to configure the Portable Prediction Server for a model through a deployment, you must first add an [external prediction environment](pred-env#add-an-external-prediction-environment) and deploy that model to an external environment.

## Download the model package {: #download-the-model-package }

You can download a PPS model package for a deployed DataRobot model running on an [external prediction environment](pred-env#add-an-external-prediction-environment). In addition, with the correct MLOps permissions, you can download a model package from the Leaderboard. You can then run prediction jobs with the Portable Prediction Server outside of DataRobot.

=== "Deployment download (with monitoring)"

    When you download a model package from a deployment, the Portable Prediction Server will [monitor](pps-run-modes#monitoring) your model for performance and track prediction statistics; however, you must ensure that your deployment supports model package downloads. The deployment must have a _DataRobot_ build environment and an _external_ prediction environment, which you can verify using the [**Governance Lens**](gov-lens) in the deployment inventory:

    ![](images/pps-1.png)

    ??? tip "What if a deployment doesn't have an external prediction environment?"
        If the deployed model you want to run in the Portable Prediction Server isn't associated with an external prediction environment, you can do either of the following:

        * Create a new deployment with an external prediction environment.
        * If you have the correct permissions, download the model package from the Leaderboard.

        If you access a deployment that doesn't support model package download, you can quickly navigate to the Leaderboard from the deployment:

        1. Click the **Model** name (on the **Overview** tab) to open the model package in the Model Registry.
        2. In the **Model Registry**, click the **Model Name** (on the **Package Info** tab) to open the model on the Leaderboard.
        3. On the **Leaderboard**, download the **Portable Prediction Server** model package from the **Predict** > **Portable Predictions** tab.

        When you download the model package from the Leaderboard, the Portable Prediction Server won't monitor your model for performance or track prediction statistics.

    On the **Deployments** tab (the *deployment inventory*), open a deployment with both a DataRobot build environment and an *external* prediction environment, and then navigate to the **Predictions > Portable Predictions** tab:

    ![](images/portable-batch-tab-callouts.png)

    | | Element | Description |
    |---|---|---|
    | ![](images/icon-1.png) | Portable Prediction Server | Helps you configure a REST API-based prediction server as a Docker image.   |
    | ![](images/icon-2.png) | Portable Prediction Server Usage | Links to the **Developer Tools** tab where you [obtain the Portable Prediction Server Docker image](#obtain-the-pps-docker-image). |
    | ![](images/icon-3.png) | Download model package (.mlpkg) | Downloads the model package for your deployed model. Alternatively, you can download the model package from the Leaderboard.  |
    | ![](images/icon-4.png) | Docker snippet | After you download your model package, use the Docker snippet to launch the Portable Prediction Server for the model with monitoring enabled. You must specify your [API key](api-key-mgmt#access-api-key-management), local filenames, paths, and [monitoring settings](pps-run-modes#monitoring) before launching. |
    | ![](images/icon-5.png) | Copy to clipboard | Copies the Docker snippet to your clipboard so that you can paste it on the command line.  |

    In the **Predictions > Portable Predictions** tab, click **Download model package**. The download appears in the downloads bar when complete.

    ![](images/pps-2.png)

    After downloading the model package, click **Copy to clipboard** and save the code snippet for later. You need this code to launch the Portable Prediction Server for the downloaded model package.

    ![](images/pps-4.png)

=== "Leaderboard download"

    !!! info "Availability information"
        The ability to download a model package from the Leaderboard depends on the [MLOps configuration](pricing) for your organization.

    If you have built a model with AutoML and want to download its model package for use with the Portable Prediction Server, navigate to the model on the Leaderboard and select the **Predict > Portable Predictions** tab.

    ![](images/pps-5.png)

    !!! note
        When downloaded from the Leaderboard, the Portable Prediction Server won't [monitor](pps-run-modes#monitoring) your model for performance or track prediction statistics.

    Click **Download .mlpkg**. After downloading the model package, click **Copy to clipboard** and save the code snippet for later. You need this code to launch the Portable Prediction Server for the downloaded model package.

## Configure the Portable Prediction Server {: #configure-the-portable-prediction-server }

To deploy the model package you downloaded to the Portable Prediction Server, you must first download the PPS Docker image and then load that image to Docker.

### Obtain the PPS Docker image {: #obtain-the-pps-docker-image }

Navigate to the **Developer Tools** tab to download the [Portable Prediction Server Docker image](api-key-mgmt#portable-prediction-server-docker-image). Depending on your DataRobot environment and version, options for accessing the latest image may differ, as described in the table below.

|  Deployment type |  Software version  |  Access method  |
|------------------|--------------------|-----------------|
| Self-Managed AI Platform | v6.3 or older | Contact your DataRobot representative. The image will be provided upon request. |
| Self-Managed AI Platform | v7.0 or later | [Download](api-key-mgmt#portable-prediction-server-docker-image) the image from **Developer Tools**; install as described [below](#load-the-image-to-docker). If the image is not available, contact your DataRobot representative. |
| Managed AI Platform | Jan 2021 and later | [Download](api-key-mgmt#portable-prediction-server-docker-image) the image from **Developer Tools**; install as described [below](#load-the-image-to-docker).|

### Load the image to Docker {: #load-the-image-to-docker }

!!! warning
     DataRobot is working to reduce image size; however, the compressed Docker image can exceed 6GB (Docker-loaded image layers can exceed 14GB). Consider these sizes when downloading and importing PPS images.
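
Given these sizes, you can optionally confirm that the Docker host has enough free space before loading. A minimal check, assuming `/var/lib/docker` (the default Docker data directory on Linux; your system may differ):

``` sh
# Check free space where Docker stores image layers, falling back to the
# root filesystem if the default path doesn't exist on this machine.
df -h /var/lib/docker 2>/dev/null || df -h /
```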

Before proceeding, make sure you have downloaded the image from [Developer Tools](api-key-mgmt#portable-prediction-server-docker-image). It is a `gzip`'ed tar archive that can be loaded by Docker.

After downloading the file and verifying its checksum, use [`docker load`](https://docs.docker.com/engine/reference/commandline/load/){ target=_blank} to load the image. You do not have to uncompress the downloaded file because Docker natively supports loading images from `gzip`'ed tar archives.
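
The checksum verification step can be sketched as follows, shown on a stand-in file so the commands are self-contained; in practice, replace the filename with your downloaded archive and set `EXPECTED` to the SHA-256 value provided with your download:

``` sh
# Create a stand-in file and record its checksum (in practice, EXPECTED
# comes from DataRobot alongside the archive, not from your own machine).
printf 'stand-in archive contents' > pps-demo.tar.gz
EXPECTED=$(sha256sum pps-demo.tar.gz | awk '{print $1}')

# Recompute and compare before running docker load:
ACTUAL=$(sha256sum pps-demo.tar.gz | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
fi
rm pps-demo.tar.gz
```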

=== "Load image to Docker"

    Copy the command below, replace `<version>`, and run the command to load the PPS image to Docker:

    ``` sh
    docker load < datarobot-portable-prediction-api-<version>.tar.gz
    ```

    !!! note
        If the PPS file isn't located in the current directory, you need to provide a local, absolute filepath to the tar file (for example, `/path/to/datarobot-portable-prediction-api-<version>.tar.gz`).

=== "Example: Load image to Docker"

    After running the `docker load` command for your PPS file, you should see output similar to the following:

    ``` sh
    docker load < datarobot-portable-prediction-api-9.0.0-r4582.tar.gz
    33204bfe17ee: Loading layer [==================================================>]  214.1MB/214.1MB
    62c077c42637: Loading layer [==================================================>]  3.584kB/3.584kB
    54475c7b6aee: Loading layer [==================================================>]  30.21kB/30.21kB
    0f91625c248c: Loading layer [==================================================>]  3.072kB/3.072kB
    21c5127d921b: Loading layer [==================================================>]  27.05MB/27.05MB
    91feb2d07e73: Loading layer [==================================================>]  421.4kB/421.4kB
    12ca493d22d9: Loading layer [==================================================>]  41.61MB/41.61MB
    ffb6e915efe7: Loading layer [==================================================>]  26.55MB/26.55MB
    83e2c4ee6761: Loading layer [==================================================>]  5.632kB/5.632kB
    109bf21d51e0: Loading layer [==================================================>]  3.093MB/3.093MB
    d5ebeca35cd2: Loading layer [==================================================>]  646.6MB/646.6MB
    f72ea73370ce: Loading layer [==================================================>]  1.108GB/1.108GB
    4ecb5fe1d7c7: Loading layer [==================================================>]  1.844GB/1.844GB
    d5d87d53ea21: Loading layer [==================================================>]  71.79MB/71.79MB
    34e5df35e3cf: Loading layer [==================================================>]  187.3MB/187.3MB
    38ccf3dd09eb: Loading layer [==================================================>]  995.5MB/995.5MB
    fc5583d56a81: Loading layer [==================================================>]  3.584kB/3.584kB
    c51face886fc: Loading layer [==================================================>]    402MB/402MB
    c6017c1b6604: Loading layer [==================================================>]  1.465GB/1.465GB
    7a879d3cd431: Loading layer [==================================================>]  166.6MB/166.6MB
    8c2f17f7a166: Loading layer [==================================================>]  188.7MB/188.7MB
    059189864c15: Loading layer [==================================================>]  115.9MB/115.9MB
    991f5ac99c29: Loading layer [==================================================>]  3.072kB/3.072kB
    f6bbaa29a1c6: Loading layer [==================================================>]   2.56kB/2.56kB
    4a0a241b3aab: Loading layer [==================================================>]  415.7kB/415.7kB
    3d509cf1aa18: Loading layer [==================================================>]  5.632kB/5.632kB
    a611f162b44f: Loading layer [==================================================>]  1.701MB/1.701MB
    0135aa7d76a0: Loading layer [==================================================>]  6.766MB/6.766MB
    fe5890c6ddfc: Loading layer [==================================================>]  4.096kB/4.096kB
    d2f4df5f0344: Loading layer [==================================================>]  5.875GB/5.875GB
    1a1a6aa8556e: Loading layer [==================================================>]  10.24kB/10.24kB
    77fcb6e243d1: Loading layer [==================================================>]  12.97MB/12.97MB
    7749d3ff03bb: Loading layer [==================================================>]  4.096kB/4.096kB
    29de05e7fdb3: Loading layer [==================================================>]  3.072kB/3.072kB
    2579aba98176: Loading layer [==================================================>]  4.698MB/4.698MB
    5f3d150f5680: Loading layer [==================================================>]  4.699MB/4.699MB
    1f63989f2175: Loading layer [==================================================>]  3.798GB/3.798GB
    3e722f5814f1: Loading layer [==================================================>]  182.3kB/182.3kB
    b248981a0c7e: Loading layer [==================================================>]  3.072kB/3.072kB
    b104fa769b35: Loading layer [==================================================>]  4.096kB/4.096kB
    Loaded image: datarobot/datarobot-portable-prediction-api:9.0.0-r4582
    ```

Once the `docker load` command completes successfully with the `Loaded image` message, you should verify that the image is loaded with the [`docker images`](https://docs.docker.com/engine/reference/commandline/images/){ target=_blank} command:

=== "View loaded images"

    Copy the command below and run it to view a list of the images in Docker:

    ``` sh
    docker images
    ```

=== "Example: View loaded images"

    In this example, you can see the `datarobot/datarobot-portable-prediction-api` image loaded in the previous step:

    ``` sh
    docker images
    REPOSITORY                                    TAG           IMAGE ID       CREATED        SIZE
    datarobot/datarobot-portable-prediction-api   9.0.0-r4582   df38ea008767   29 hours ago   17GB
    ```

!!! tip
    Optionally, to save disk space, you can delete the compressed image archive `datarobot-portable-prediction-api-<version>.tar.gz` after your Docker image loads successfully.


## Launch the PPS with the code snippet {: #launch-the-pps-with-the-code-snippet }

After you've downloaded the model package and configured the Docker PPS image, you can use the associated [`docker run`](https://docs.docker.com/engine/reference/commandline/run/){ target=_blank} code snippet to launch the Portable Prediction Server with the downloaded model package.

=== "Deployment code snippet (with monitoring)"

    In the example code snippet below from a deployed model, you should configure the following highlighted options:

    ``` sh linenums="1" hl_lines="3 4 9 10"
    docker run \
    -p 8080:8080 \
    -v <local path to model package>/:/opt/ml/model/ \
    -e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \
    -e PREDICTION_API_MONITORING_ENABLED="True" \
    -e MLOPS_DEPLOYMENT_ID="6387928ebc3a099085be32b7" \
    -e MONITORING_AGENT="True" \
    -e MONITORING_AGENT_DATAROBOT_APP_URL="https://app.datarobot.com" \
    -e MONITORING_AGENT_DATAROBOT_APP_TOKEN="<your api token>" \
    datarobot-portable-prediction-api
    ```

    * `-v <local path to model package>/:/opt/ml/model/ \`: Provide the local, absolute path to the directory containing the model package you downloaded. The `-v` (or `--volume`) option bind mounts a volume, adding the contents of your local model package directory (at `<local path to model package>`) to your Docker container's `/opt/ml/model` volume.

    * `-e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \`: Provide the file name of the model package mounted to the `/opt/ml/model/` volume. This sets the `PREDICTION_API_MODEL_REPOSITORY_PATH` environment variable, indicating where the PPS can find the model package.

    * `-e MONITORING_AGENT_DATAROBOT_APP_TOKEN="<your api token>" \`: Provide your API token from the DataRobot Developer Tools for monitoring purposes. This sets the `MONITORING_AGENT_DATAROBOT_APP_TOKEN` environment variable, which supplies the API key the PPS uses to report monitoring data to DataRobot.

    * `datarobot-portable-prediction-api`: Replace this line with the image name and version of the PPS image you're using. For example, `datarobot/datarobot-portable-prediction-api:<version>`.

=== "Leaderboard code snippet"

    In the example code snippet below for a Leaderboard model, you should configure the following highlighted options:

    ``` sh linenums="1" hl_lines="3 4 5"
    docker run \
    -p 8080:8080 \
    -v <local path to model package>/:/opt/ml/model/ \
    -e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \
    datarobot-portable-prediction-api
    ```

    * `-v <local path to model package>/:/opt/ml/model/ \`: Provide the local, absolute file path to the directory containing the model package you downloaded. The `-v` (or `--volume`) option bind mounts a volume, adding the contents of your local model package directory (at `<local path to model package>`) to your Docker container's `/opt/ml/model` volume.

    * `-e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \`: Provide the file name of the model package mounted to the `/opt/ml/model/` volume. This sets the `PREDICTION_API_MODEL_REPOSITORY_PATH` environment variable, indicating where the PPS can find the model package.

    * `datarobot-portable-prediction-api`: Replace this line with the image name and version of the PPS image you're using. For example, `datarobot/datarobot-portable-prediction-api:<version>`.

??? tip "Use docker tag to name and tag an image"

    Alternatively, you can keep `datarobot-portable-prediction-api` in the last line if you use [`docker tag`](https://docs.docker.com/engine/reference/commandline/tag/){ target=_blank} to tag the new image as `latest` and rename it to `datarobot-portable-prediction-api`.

    In this example, Docker renames the image and replaces the `9.0.0-r4582` tag with the `latest` tag:

    ``` sh
    docker tag datarobot/datarobot-portable-prediction-api:9.0.0-r4582 datarobot-portable-prediction-api:latest
    ```

    To verify the new tag and name, you can use the `docker images` command again:

    ``` sh
    docker images
    REPOSITORY                                    TAG           IMAGE ID       CREATED        SIZE
    datarobot/datarobot-portable-prediction-api   9.0.0-r4582   df38ea008767   29 hours ago   17GB
    datarobot-portable-prediction-api             latest        df38ea008767   29 hours ago   17GB
    ```

After completing the setup, you can use the Docker snippet to [run single or multi model portable real-time predictions](pps-run-modes) or [run portable batch predictions](portable-batch-predictions#run-portable-batch-predictions). See also [additional examples](portable-batch-predictions#more-examples) of prediction jobs that use the PPS. The PPS can run disconnected from the main DataRobot installation. Once started, the container serves an HTTP API on port `8080`.
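
Once the container is running, you can score data over HTTP. The example below is a sketch that assumes single-model mode exposes a `POST /predictions` endpoint on the mapped port; confirm the exact paths and headers in the [run modes](pps-run-modes) documentation for your PPS version:

``` sh
# Send a small CSV to a locally running PPS; the /predictions path is an
# assumption for single-model mode. The fallback message covers the case
# where the container isn't running yet.
printf 'feature_1,feature_2\n1.0,2.0\n' > predict_me.csv
curl -s -X POST http://localhost:8080/predictions \
    -H "Content-Type: text/csv; charset=UTF-8" \
    --data-binary @predict_me.csv \
    || echo "request failed: is the PPS container running?"
```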

??? important "Run the PPS for FIPS-enabled model packages"
    If you configure your DataRobot cluster with `ENABLE_FIPS_140_2_MODE: true` (in the `config.yaml` file at the cluster level), that cluster builds `.mlpkg` files that require you to launch the PPS with `ENABLE_FIPS_140_2_MODE: true`. For this reason, you can't host FIPS-enabled models and standard models in the same PPS instance.

    To run the PPS with support for FIPS-enabled models, you can include the following argument in the [`docker run`](https://docs.docker.com/engine/reference/commandline/run/){ target=_blank} command:
    
    ``` sh
    -e ENABLE_FIPS_140_2_MODE="true"
    ```

    The full command for PPS container startup would look like the following example:

    ``` sh
    docker run \
    -td \
    -p 8080:8080 \
    -e PYTHON3_SERVICES="true" \
    -e ENABLE_FIPS_140_2_MODE="true" \
    -v <local path to model package>/:/opt/ml/model \
    --name portable_predictions_server \
    --rm datarobot/datarobot-portable-prediction-api:<version>
    ```
